Copyright © 1971 American Telephone and Telegraph Company 

The Bell System Technical Journal 

Vol. 50, No. 10, December, 1971 

Printed in U.S.A. 



Some Considerations of Error Bounds 
in Digital Systems 

By V. K. PRABHU 

(Manuscript received June 22, 1971) 

Simple upper and lower bounds on the distribution function of the 
sum of two random variables are presented in terms of the marginal distribu- 
tion functions of the variables. These bounds are then used to obtain upper 
and lower bounds to the error probability of a coherent digital system in the 
presence of intersymbol interference and additive gaussian noise. The 
bounds are expressed in terms of the error probability obtained with a 
finite pulse train, and the bounds to the marginal distribution function of 
the residual pulse train. Since the difference between the upper and lower 
bounds can be shown to be a monotonically decreasing function of the 
number of pulses in the finite pulse train, the bounds can be used to compute 
the error probability of the system with arbitrarily small error. Also when 
the system performance is evaluated by simulation techniques, the methods 
presented in our paper can be utilized to estimate the error caused by using 
a finite pulse train approximation. 

I. INTRODUCTION 

In digital transmission systems the transfer characteristics of the 
transmitting and receiving filters are far from ideal, and the real trans- 
mission channel usually exhibits some form of time dispersion.^{1,2} When 
an ideal digital signal is passed through such filters or is transmitted 
through such a channel, the successive pulses overlap; this form of 
distortion is usually known as intersymbol interference. Intersymbol 
interference may also result from the choice of nonoptimum sampling 
instants, imperfect demodulating-carrier phase, improper pulse design, 
etc. In addition the signal may be corrupted by thermal noise, co- 
channel and adjacent channel interference, and other forms of noise 
that may be present in the channel or in the system used to transmit 
the information. 

In digital transmission systems, one of the main performance char- 
acteristics is the probability of error; this probability of error can often 
be expressed as a finite weighted sum of one or more distribution 
functions. 

Various authors have tried to evaluate this probability of error by a 
variety of methods,^{2-14} but this highly complex probability distribution 
can seldom be exactly computed. 

Simulation techniques that may be used to solve this and other similar 
problems are never exact since one is constrained to use only a finite 
number of pulses and no bounds to the truncation error have been 
derived.* 

Another method is a worst-case or "eye pattern" analysis. Since the 
probability of occurrence of a worst sequence may be very small, this 
analysis usually leads to very pessimistic results and suboptimum 
system design. 

Recently, some authors have derived^{7-9} several different upper 
bounds on the probability of error when the system is subject to both 
intersymbol interference and additive gaussian noise. Some of these 
bounds make use of the Chernoff inequality in their derivation, and 
hence are often more useful than the worst-case bound.^{8} However, 
since these bounds, in certain cases, can be shown to be loose,^{11} and 
since no useful lower bounds have been derived, they are not as useful 
in system design as the evaluation of the exact error rate of the system. 

The third method consists in using the finite pulse train approxima- 
tion and calculating the error probability either by the direct enumera- 
tion of all possible sequences^{2} or by the series expansion method.^{10,11} 
The series expansion method, which involves the computation of the 
moments of the intersymbol interference, is a convenient method but 
is still inexact as no truncation error bounds due to the residual pulse 
train have been derived. Note that in this method the number of terms 
in the finite pulse train is gradually increased until the change in 
probability of error is less than a given number ε.^{11} 

In this paper we first present simple upper and lower bounds to the 
distribution function of the sum of two random variables z_N and z_R 
in terms of their marginal distribution functions. If the spread or dis- 
persion of the random variable z_R is smaller than the spread of the 



* In simulation techniques the number N of pulses is usually chosen so that the 
computed probability of error changes by less than ε when the number N is 
increased by 1. Noting that the series Σ_{n=1}^∞ 1/n diverges, and that the difference 
between two successive partial sums of this series can be made less than any given 
number ε, one concludes that this technique of choosing N is mathematically un- 
sound. 




random variable z_N, one can show that these two bounds are fairly 
close to each other and that one can evaluate the distribution function 
of the sum of the variables in terms of the distribution function of z_N 
and the bounds on the distribution function of z_R. 

We then use these bounds to obtain upper and lower bounds on the 
error probability of a binary coherent digital system in the presence 
of intersymbol interference and additive gaussian noise. Since the 
difference between the upper and lower bounds can be shown to be a 
monotone decreasing function of the number N of pulses in the finite 
pulse train, the bounds can be used to compute the error probability 
of the system with arbitrarily small error. 

Also when the system performance is evaluated by simulation tech- 
niques, the methods presented in our paper can be utilized to estimate 
the error caused by using a finite pulse train approximation. 

If the symbols are equally likely, we also show that another set of 
upper and lower bounds can be derived for the probability of error of a 
system subject to intersymbol interference and additive gaussian noise. 

The usefulness of the bounds is illustrated by two examples. 

II. DISTRIBUTION FUNCTION AND ITS EVALUATION 

Let us assume that a random variable z is the sum of two random 
variables z_N and z_R, 

z = z_N + z_R,     (1) 

and that we are interested in the distribution function of z 

F_z(a) = Pr [z ≤ a] = Pr [z_N + z_R ≤ a].     (2) 

In this section we shall also assume that z_N and z_R are statistically 
independent random variables. 

The probability of error of a large number of digital systems subject 
to various forms of noise can often be expressed as a weighted sum of 
F_z(a)'s. If z is the sum of an infinite number of random variables, and 
if z_N represents its partial sum of the first N terms, we sometimes can 
evaluate F_{z_N}(a), but F_z(a) can seldom be computed exactly. In such a 
case it is often advantageous to obtain upper and lower bounds to 
F_z(a) in terms of F_{z_N}(a) and some known parameters associated with 
the random variable z_R, the sum of the remaining terms in z. If the 
difference between the two bounds is a strictly monotone-decreasing 
function of N, we can then calculate F_z(a) with arbitrarily small error. 

Without loss of generality we shall assume that the mean of z_R is 
zero. From (2) we can write (see Fig. 1) 

F_z(a) = ∫_{-∞}^{∞} ∫_{-∞}^{a-y} f_{z_N, z_R}(x, y) dx dy     (3) 

if the joint probability density function f_{z_N, z_R}(x, y) exists; and 

F_z(a) = ∫_{-∞}^{∞} ∫_{-∞}^{a-y} dF_{z_N}(x) dF_{z_R}(y) = ∫_{-∞}^{∞} F_{z_N}(a - y) dF_{z_R}(y),     (4) 

or 

F_z(a) = ⟨F_{z_N}(a - y_{z_R})⟩_{z_R}.     (5) 



Let us now select an interval (-Δl, Δu) from the range of the random 
variable z_R. From (4) we can write 

F_z(a) = I_1 + I_2 + I_3,     (6) 

where 

I_1 = ∫_{-∞}^{-Δl} F_{z_N}(a - y) dF_{z_R}(y),     (7) 

I_2 = ∫_{-Δl}^{Δu} F_{z_N}(a - y) dF_{z_R}(y),     (8) 

and 

I_3 = ∫_{Δu}^{∞} F_{z_N}(a - y) dF_{z_R}(y).     (9) 



Fig. 1 — Distribution function of z = z_N + z_R. 




One can show (see Fig. 2) that 

0 ≤ I_1 ≤ ∫_{-∞}^{-Δl} dF_{z_R}(y) = F_{z_R}(-Δl),     (10) 

0 ≤ I_3 ≤ F_{z_N}(a - Δu) ∫_{Δu}^{∞} dF_{z_R}(y) = F_{z_N}(a - Δu){1 - F_{z_R}(Δu)} 
        ≤ F_{z_N}(a + Δl){1 - F_{z_R}(Δu)},     (11) 

I_2 ≥ F_{z_N}(a - Δu) ∫_{-Δl}^{Δu} dF_{z_R}(y) = F_{z_N}(a - Δu){F_{z_R}(Δu) - F_{z_R}(-Δl)},     (12) 

and 

I_2 ≤ F_{z_N}(a + Δl) ∫_{-Δl}^{Δu} dF_{z_R}(y) = F_{z_N}(a + Δl){F_{z_R}(Δu) - F_{z_R}(-Δl)}.     (13) 

Combining (6) with (10)-(13), we have 

F_{z_N}(a - Δu)[F_{z_R}(Δu) - F_{z_R}(-Δl)] 
        ≤ F_z(a) ≤ F_{z_R}(-Δl) + F_{z_N}(a + Δl)[1 - F_{z_R}(-Δl)] 
        ≤ F_{z_R}(-Δl) + F_{z_N}(a + Δl).     (14) 

In general it is not easy to compute F_{z_R}(y). However we may be able 
to bound F_{z_R}(y) so that 

0 ≤ F_{z_R}(-Δl) = Pr [z_R ≤ -Δl] ≤ L_{z_R}(-Δl) ≤ 1,     (15) 

0 ≤ 1 - F_{z_R}(Δu) = Pr [z_R > Δu] ≤ U_{z_R}(Δu) ≤ 1,     (16) 

and 

1 ≥ F_{z_R}(Δu) - F_{z_R}(-Δl) = Pr [-Δl < z_R ≤ Δu] 
        ≥ 1 - L_{z_R}(-Δl) - U_{z_R}(Δu) ≥ 0.     (17) 

If these bounds can be found, (14)-(17) can be made to yield 

F_{z_N}(a - Δu)[1 - L_{z_R}(-Δl) - U_{z_R}(Δu)] ≤ F_z(a) 
        ≤ F_{z_N}(a + Δl) + L_{z_R}(-Δl).     (18) 

These are the basic bounds that we shall use in the rest of this paper. 
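
As a concrete illustration of how (18) is evaluated, the short Python sketch below computes both sides of the bound for user-supplied functions. The particular stand-ins chosen here (a gaussian F_zN and Chernoff-type tail bounds with a parameter beta2) are illustrative assumptions and are not taken from the analysis above.

    # Minimal sketch of the bounds in (18); F_zN, L_zR, and U_zR are stand-ins
    # chosen for illustration (gaussian F_zN, Chernoff-type tail bounds).
    from math import erfc, exp, sqrt

    def F_zN(a, sigma=1.0):
        # distribution function of z_N (illustrative: zero-mean gaussian, variance sigma**2)
        return 0.5 * erfc(-a / (sigma * sqrt(2.0)))

    def L_zR(dl, beta2=0.01):
        # assumed bound on Pr[z_R <= -dl], dl > 0
        return exp(-dl * dl / (2.0 * beta2))

    def U_zR(du, beta2=0.01):
        # assumed bound on Pr[z_R > du], du > 0
        return exp(-du * du / (2.0 * beta2))

    def bounds_18(a, du, dl):
        """Lower and upper bounds on F_z(a) given by (18)."""
        lower = F_zN(a - du) * (1.0 - L_zR(dl) - U_zR(du))
        upper = F_zN(a + dl) + L_zR(dl)
        return lower, upper

    print(bounds_18(a=-3.0, du=0.5, dl=0.5))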

If the mass of the distribution of z_R is very much concentrated 
around y = 0, our technique of computing F_z(a) from (18) relies on 
the assumption that we can find two numbers Δu and Δl such that 


Δu ≪ |a|, Δl ≪ |a|, L_{z_R}(-Δl) ≪ F_{z_N}(a), U_{z_R}(Δu) ≪ F_{z_N}(a), and 
F_{z_N}(a - Δu) ≈ F_{z_N}(a + Δl). 

Fig. 2a — Distribution function F_{z_N}(x). 
Fig. 2b — Distribution function F_{z_R}(y). The interval (-Δl, Δu) is contained in 
the range of z_R, and for all practical purposes the mass of z_R is contained in 
(-Δl, Δu). 

The difference D(Δu, Δl) between the upper and lower bounds can 
be written as 

D(Δu, Δl) = Pr [a - Δu < z_N ≤ a + Δl] 
        + F_{z_N}(a - Δu)[L_{z_R}(-Δl) + U_{z_R}(Δu)] + L_{z_R}(-Δl).     (19) 

If Δu and Δl can be so chosen that they are strictly monotone-decreasing 
functions of N, Δu → 0, Δl → 0, as N → ∞, and if the bounds on the 
distribution of z_R are such that, for sufficiently large N, L_{z_R}(-Δl) 
and U_{z_R}(Δu) can be made smaller than any given number ε_1, we can 
estimate F_z(a) from (18) with arbitrarily small error.* 

For any given N, even though Δu and Δl can be chosen by optimizing 
the bounds in (18), this optimization leads to very complex equations. 
Hence we think that an algorithm should be developed to choose Δu 
and Δl for any given z_N and z_R. The development of this algorithm 
will be illustrated by an example in Section IV. 

* We assume that Pr [a - 0 < z_N ≤ a + 0] = 0. 




2.1 Lower Bound Evaluation with Convex F_{z_N}(a) 

We shall now derive a simpler lower (upper) bound to F_z(a) if F_{z_N}(a) 
is a convex (concave) function and if z_R is an even random variable, 
or 

F_{z_R}(-t) = 1 - F_{z_R}(t).     (20) 

From (20) one can show that the mean of z_R is zero, and that its 
probability density f_{z_R}(t), if it exists, satisfies the equation 

f_{z_R}(-t) = f_{z_R}(t).     (21) 

If z_R is an even random variable, we shall set Δu = Δl in (18). 

Let us now assume that z_R is an even random variable and that F_{z_N}(a) 
is convex over the range (a - Δu, a + Δu) where (-Δu, Δu) is the 
range of z_R. Since z_R is an even random variable 

F_z(a) = ⟨F_{z_N}(a - y_{z_R})⟩_{z_R} = ⟨F_{z_N}(a + y_{z_R})⟩_{z_R},     (22) 

or 

F_z(a) = ⟨½[F_{z_N}(a - y_{z_R}) + F_{z_N}(a + y_{z_R})]⟩_{z_R}.     (23) 

Since F_{z_N}(a) is convex over the range (a - Δu, a + Δu),^{16} 

½[F_{z_N}(a - y_{z_R}) + F_{z_N}(a + y_{z_R})] ≥ F_{z_N}(a).     (24) 

From (23) and (24) we have 

F_z(a) ≥ F_{z_N}(a).     (25) 

Since this bound does not contain Δu and Δl, it is simpler to calculate 
than that given in (18). It is also tighter than the lower bound in (18). 
In this case we then have 

F_{z_N}(a) ≤ F_z(a) ≤ F_{z_N}(a + Δl) + L_{z_R}(-Δl).     (26) 

If F_{z_N}(a) is concave over the domain (a - Δl, a + Δu) and if z_R 
is an even random variable, we can similarly show that 

F_{z_N}(a) ≥ F_z(a) ≥ F_{z_N}(a - Δu)[1 - L_{z_R}(-Δl) - U_{z_R}(Δu)].     (27) 
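
A quick numerical check of the convexity bound (25), under the illustrative assumption that z_N is gaussian (so that F_{z_N} is convex for a < 0) and that z_R takes the two values ±d with equal probability; all numbers are arbitrary choices.

    # Numerical check of (25): z_N gaussian (F_zN convex for a < 0), z_R = +/- d
    # with equal probability (an even random variable).  Numbers are illustrative.
    from math import erfc, sqrt

    def F_zN(a, sigma=1.0):
        return 0.5 * erfc(-a / (sigma * sqrt(2.0)))

    a, d = -2.0, 0.3
    F_z = 0.5 * (F_zN(a - d) + F_zN(a + d))    # exact F_z(a) for this two-point z_R
    print(F_z >= F_zN(a))                      # the lower bound (25): prints True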

2.2 Evaluation of Another Upper Bound to F_z(a) 

Often we find that z contains a gaussian random variable n and can 
be written as 

z = n + w_N + z_R = z_N + z_R,     z_N = n + w_N,     (28) 

where n, w_N, and z_R are statistically independent random variables. 




We have already assumed that the mean of z_R is zero. Without loss 
of generality we shall now assume that the mean of n is zero, and its 
variance is σ^2. 

From (28) one can show that 

F_{z_N}(a) = ⟨½ erfc ((-a + x_{w_N})/(σ√2))⟩_{w_N},     (29) 

where 

erfc (x) = (2/√π) ∫_x^∞ exp (-t^2) dt.     (30) 

Hence we have^{17,18} 

F_{z_N}(a) = ½ erfc (-a/(σ√2)) + (1/√π) exp (-a^2/2σ^2) 
        · Σ_{k=1}^∞ (-1)^k H_{k-1}(-a/(σ√2)) (1/(σ√2))^k μ_k/k!,     (31) 

where H_k(x) is the kth order Hermite polynomial and μ_k is the kth 
moment of w_N, 

μ_k = ∫_{-∞}^{∞} x^k dF_{w_N}(x).     (32) 

If the range (-Ω_l, Ω_u) of w_N is finite and if Ω denotes the maximum 
absolute value that can be attained by w_N, we can show that 

|μ_{k+s}| ≤ m_k Ω^s,     k ≥ 0, s ≥ 0,     (33) 

m_k = ∫_{-∞}^{∞} |x|^k dF_{w_N}(x).     (34) 

m_k is called the kth absolute moment of w_N. 

If the first K moments are used in estimating F_{z_N}(a) from (31), the 
truncation error T_K is given by 

T_K = (1/√π) exp (-a^2/2σ^2) Σ_{k=K+1}^∞ (-1)^k H_{k-1}(-a/(σ√2)) (1/(σ√2))^k μ_k/k!.     (35) 

Since it can be shown^{19} that 

|H_n(x)| < b 2^{n/2} √(n!) exp (x^2/2),     b ≅ 1.086435,     (36) 

one can show from (35)-(36) that 

|T_K| < (b/√(2π)) exp (-a^2/4σ^2) (m_K/σ^K)(Ω/σ) / [(K + 1)√(K!)] 
        · [1 - Ω/(σ√(K + 1))]^{-1},     Ω/(σ√(K + 1)) < 1.     (37) 

From (31) and (37) one may observe that F_{z_N}(a) may be estimated 
with as great an accuracy as desired if the range of w_N is finite and if 
the moments of w_N are known. 
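
The following Python sketch evaluates the truncated series (31). It assumes the moments μ_k of w_N are supplied; here they are computed by brute force for a two-pulse w_N, which is purely an illustrative choice, as are all numerical values. scipy's eval_hermite is the physicists' Hermite polynomial H_n used above.

    # Sketch of the truncated series (31); assumes the moments mu[k] of w_N are known.
    import math
    from scipy.special import erfc, eval_hermite

    def F_zN_series(a, sigma, mu, K):
        """Approximate F_zN(a) by keeping the first K moments in (31)."""
        u = -a / (sigma * math.sqrt(2.0))
        total = 0.5 * erfc(u)
        coef = math.exp(-a * a / (2.0 * sigma * sigma)) / math.sqrt(math.pi)
        for k in range(1, K + 1):
            total += coef * ((-1.0) ** k) * eval_hermite(k - 1, u) \
                     * (1.0 / (sigma * math.sqrt(2.0))) ** k * mu[k] / math.factorial(k)
        return total

    # illustrative moments for w_N = a1*r1 + a2*r2, a_i = +/-1 with equal probability
    r = (0.2, 0.1)
    mu = [0.25 * sum((s1 * r[0] + s2 * r[1]) ** k for s1 in (-1, 1) for s2 in (-1, 1))
          for k in range(11)]
    print(F_zN_series(a=-0.8, sigma=0.3, mu=mu, K=10))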

If w_N is an even random variable, we can also show that^{10,18} 

μ_{2k-1} = 0,     k ≥ 1,     (38) 

F_{z_N}(a) = ½ erfc (-a/(σ√2)) + (1/√π) exp (-a^2/2σ^2) 
        · Σ_{k=1}^∞ H_{2k-1}(-a/(σ√2)) (1/(σ√2))^{2k} μ_{2k}/(2k)!,     (39) 
|T_{2K}| < (b/√(2π)) exp (-a^2/4σ^2) μ_{2K}(Ω/σ)^2 / [σ^{2K}(2K + 2)√((2K + 1)!)] 
        · [1 - (Ω/σ)^2/√((2K + 2)(2K + 3))]^{-1}, 
        (Ω/σ)^2/[(2K + 2)(2K + 3)]^{1/2} < 1.     (40) 

By using the inequality^{19} 

|H_{2k+1}(t)| ≤ |t| exp (t^2/2)(2k + 2)!/(k + 1)!,     (41) 

we can also show that 

|T_{2K}| ≤ (|a|/(σ√(2π))) exp (-a^2/4σ^2) μ_{2K}(Ω/σ)^2 / [2^{K+1} σ^{2K}(K + 1)!] 
        · [1 - (Ω/σ)^2/(K + 2)]^{-1},     (Ω/σ)^2/(K + 2) < 1.     (42) 
If z_R is an even random variable, we have 

F_z(a) = ⟨¼ erfc [(-a + x_{w_N} + y_{z_R})/(σ√2)]⟩_{w_N, z_R} 
        + ⟨¼ erfc [(-a + x_{w_N} - y_{z_R})/(σ√2)]⟩_{w_N, z_R}.     (43) 

Since one can show (see Appendix A) that 

½ erfc (x + α) + ½ erfc (x - α) ≥ erfc (x),     x ≥ 0,     (44) 

we can write 

F_z(a) ≥ ⟨½ erfc [(-a + x_{w_N})/(σ√2)]⟩_{w_N},     -a + x_{w_N} ≥ 0, 
        = F_{z_N}(a),     -a + x_{w_N} ≥ 0,  ∀ x_{w_N}.     (45) 






Now from (28) we can write 

F_z(a) = ⟨½ erfc (x_1)⟩_{w_N, z_R},     x_1 = (-a + x_{w_N} + y_{z_R})/(σ√2),     (46) 

where 

erfc (x_1) = (2/√π) ∫_{x_2}^∞ exp {-(s + y_{z_R}/(σ√2))^2} ds,     x_2 = (-a + x_{w_N})/(σ√2).     (47) 

Since 

exp {-(y_{z_R}/(σ√2))^2} ≤ 1,     ∀ y_{z_R},     (48) 

erfc (x_1) ≤ (2/√π) ∫_{x_2}^∞ exp [-s^2 - s(√2/σ)y_{z_R}] ds,     (49) 

we have 

F_z(a) ≤ (1/√π) ⟨∫_{x_2}^∞ exp [-s^2 - s(√2/σ)y_{z_R}] ds⟩_{w_N, z_R}     (50) 

        = (1/√π) ⟨∫_{x_2}^∞ exp (-s^2) Φ_{z_R}(-s√2/σ) ds⟩_{w_N},     (51) 

where 

Φ_{z_R}(t) = ∫_{-∞}^{∞} exp (ty) dF_{z_R}(y)     (52) 

is the moment-generating function of the random variable z_R. 
If we can find two numbers m_R and σ_R such that 

Φ_{z_R}(t) ≤ exp [m_R t + σ_R^2 t^2/2],     ∀ t,     (53) 

one can show from (51) that 

F_z(a) ≤ B_{z_N}(a, m_R, σ_R^2) = (1 - σ_R^2/σ^2)^{-1/2} exp [m_R^2/{2σ^2(1 - σ_R^2/σ^2)}] 
        · ⟨½ erfc [(-a + m_R/(1 - σ_R^2/σ^2) + x_{w_N}) / (σ 2^{1/2}(1 - σ_R^2/σ^2)^{-1/2})]⟩_{w_N}, 
        σ_R^2/σ^2 < 1.     (54) 

The derivation of the upper bound in (54) is based on results given 
in Ref. 20. 

In this case we then have 

F_{z_N}(a) ≤ F_z(a) ≤ B_{z_N}(a, m_R, σ_R^2), 
        -a + x_{w_N} ≥ 0,     σ_R^2/σ^2 < 1,     ∀ x_{w_N}.     (55) 




Since the lower bound in (55) may not be valid if -a + x_{w_N} can be 
nonpositive for some value of x_{w_N}, and if the maximum absolute value 
of x_{w_N} is a monotone-increasing function of N, we note that there is 
an upper bound N_max to N that can be used in estimating the lower 
bound in (55). If this upper bound N_max < ∞, we may not be able to 
estimate F_z(a) from (55) with arbitrarily small error. However if 
there is no finite upper bound to N such that -a + x_{w_N} is nonpositive 
(system with an "open eye pattern") and if |m_R| and σ_R are strictly 
monotone-decreasing functions of N, it is clear that we can estimate 
F_z(a) from (55) with any desired accuracy. 
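
A minimal sketch of the upper bound (54), assuming that w_N takes the 2^N values generated by a finite set of pulse amplitudes r with independent ±1 symbols; the amplitudes and the parameters m_R, σ_R^2, σ below are illustrative assumptions, not values from the analysis above.

    # Sketch of the upper bound (54); w_N is assumed to take the 2**len(r) values
    # generated by independent +/-1 symbols and pulse amplitudes r (illustrative).
    import itertools, math

    def B_zN(a, m_R, sig2_R, sigma, r):
        c = 1.0 - sig2_R / sigma**2                 # requires sig2_R < sigma**2
        scale = math.exp(m_R**2 / (2.0 * sigma**2 * c)) / math.sqrt(c)
        denom = sigma * math.sqrt(2.0) / math.sqrt(c)
        acc = 0.0
        for signs in itertools.product((-1.0, 1.0), repeat=len(r)):
            x = sum(s * rk for s, rk in zip(signs, r))     # one realization of w_N
            acc += 0.5 * math.erfc((-a + m_R / c + x) / denom)
        return scale * acc / 2**len(r)

    print(B_zN(a=-0.8, m_R=0.0, sig2_R=0.02, sigma=0.3, r=(0.2, 0.1, 0.05)))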

III. BOUNDS ON THE TAILS OF PROBABILITY DISTRIBUTIONS 

To use the bounds given in (18), it is necessary to determine L_{z_R}(-Δl) 
and U_{z_R}(Δu). There are several methods (including numerical methods) 
of determining these parameters, and here we shall discuss two of them. 

From Chebyshev-Bienaymé bounds^{15,21} we have 

Pr [z_R ≤ -Δl] ≤ σ_{z_R}^2/(Δl)^2,     (56) 

Pr [z_R > Δu] ≤ σ_{z_R}^2/(Δu)^2,     (57) 

where 

σ_{z_R}^2 = ⟨y_{z_R}^2⟩.     (58) 

Hence we can set 

L_{z_R}(-α) = U_{z_R}(α) = σ_{z_R}^2/α^2.     (59) 



Also in communication problems, bounds of the Chernoff type have 
been used on the tails of the probability distributions, and these Chernoff 
bounds are often tighter than the Chebyshev-Bienaymé bounds.^{7-9,21-23} 

One can show^{23} that 

Pr [z_R ≤ -Δl] ≤ exp (-λ Δl)⟨exp (-λ y_{z_R})⟩ 
        = exp (-λ Δl) Φ_{z_R}(-λ),     λ ≥ 0,     (60) 

Pr [z_R > Δu] ≤ exp (-λ Δu) Φ_{z_R}(λ),     λ ≥ 0.     (61) 

The parameter λ is arbitrary and is chosen so as to optimize the bounds 
in (60) and (61). 

If we can find two functions Ψ_{z_R}(-λ) and Ψ_{z_R}(λ) such that 

0 ≤ Φ_{z_R}(-λ) ≤ Ψ_{z_R}(-λ),     λ ≥ 0,     (62) 

and 

0 ≤ Φ_{z_R}(λ) ≤ Ψ_{z_R}(λ),     λ ≥ 0,     (63) 

and then optimize exp (-λ Δl) Ψ_{z_R}(-λ) and exp (-λ Δu) Ψ_{z_R}(λ), we 
can make 

L_{z_R}(-Δl) = exp (-λ_opt Δl) Ψ_{z_R}(-λ_opt),     (64) 

and 

U_{z_R}(Δu) = exp (-λ_opt Δu) Ψ_{z_R}(λ_opt).     (65) 

The functions Ψ_{z_R}(-λ) and Ψ_{z_R}(λ) are often chosen so that (64) and 
(65) have the desired functional forms for optimization.^{7,9,24} From 
(52) one may note that it is not necessary to determine (explicitly) 
Φ_{z_R}(λ) to get Ψ_{z_R}(-λ) and Ψ_{z_R}(λ). Bounds can be used to determine 
these functions. Also one may make use of the semi-invariant moment- 
generating function of z_R in determining Ψ_{z_R}(-λ) and Ψ_{z_R}(λ). 

If z_R is an even random variable, note also that 

Φ_{z_R}(-λ) = Φ_{z_R}(λ),     λ ≥ 0,     (66) 

and we can make 

Ψ_{z_R}(-λ) = Ψ_{z_R}(λ),     λ ≥ 0,     (67) 

L_{z_R}(-α) = U_{z_R}(α) = exp (-α λ_opt) Ψ_{z_R}(λ_opt).     (68) 
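
To make the two tail bounds of this section concrete, the sketch below evaluates the Chebyshev-Bienaymé bound (59) and a Chernoff bound of the form (60)-(61) for z_R = Σ a_k r_k with a_k = ±1, choosing λ by a crude grid search instead of an analytic optimization; the residual amplitudes r_tail and all other numbers are illustrative assumptions.

    # The two tail bounds for z_R = sum a_k r_k (a_k = +/-1): the Chebyshev-
    # Bienayme bound (59) and a Chernoff bound (60)-(61) with lambda chosen by
    # a crude grid search.  The residual amplitudes r_tail are illustrative.
    import math

    def chebyshev_tail(alpha, r_tail):
        # sigma_zR**2 / alpha**2, as in (56)-(59)
        return sum(rk * rk for rk in r_tail) / alpha**2

    def chernoff_tail(alpha, r_tail, lam_max=200.0, steps=2000):
        # min over lambda of exp(-lambda*alpha) * prod_k cosh(lambda*r_k), as in (61)
        best = 1.0
        for i in range(1, steps + 1):
            lam = lam_max * i / steps
            log_val = -lam * alpha + sum(math.log(math.cosh(lam * rk)) for rk in r_tail)
            best = min(best, math.exp(log_val))
        return best

    r_tail = [0.02 * 0.9**k for k in range(50)]
    print(chebyshev_tail(0.3, r_tail), chernoff_tail(0.3, r_tail))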

IV. ERROR BOUNDS WITH INTERSYMBOL INTERFERENCE AND ADDITIVE 
GAUSSIAN NOISE 

The methods presented in Section II are now applied to the analysis 
of a binary coherent digital system subject to intersymbol interference 
and additive gaussian noise. Various methods have been proposed to 
evaluate this error probability.^{1-11} They provide either an upper bound 
to the error rate or the error rate with a finite pulse train approximation. 

Let us now assume that the signal at the input to the receiver detector 
(see Fig. 3) can be represented as 

y(t) = Σ_l a_l p(t - lT) + n(t),     (69) 

where n(t) is a gaussian random variable with mean zero and variance σ^2. 
We shall also assume that {a_l} is a sequence of independent random 
variables, and a_l = ±1 with equal probability. 

Fig. 3 — Simplified block diagram of a coherent digital communication system. 
C(f), T(f), and R(f) denote respectively the transfer functions of the channel, and 
transmitting and receiving filters. T is the signaling interval. 

If the zeroth transmitted symbol is a_0 = 1 and if it is detected by 
sampling y(t) at t = t_0, we can show that 

y(t_0) = p(t_0) + Σ'_l a_l p(t_0 - lT) + n(t_0),     (70) 

where Σ'_l does not include the term l = 0. Assuming that the slicing 
level of the system is zero, and that there are no other imperfections in 
the system, we can show that the probability of error P_2 can be written as 

P_2 = Pr [n + Σ'_l a_l p_l < -p_0],     (71) 

where 

p_l = |p(t_0 - lT)|,     (72) 

and 

n = n(t_0).     (73) 

Without loss of generality we shall now reorder the sequence {p_l} in 
such a way that the terms of the sequence are nonincreasing with 
increasing l, and let us denote this new sequence by {r_k}. Hence we can 
write 

P_2 = Pr [n + Σ_k a_k r_k < -p_0],     (74) 

or 

P_2 = Pr [z < -p_0] = F_z(-p_0),     (75) 

z = n + Σ_k a_k r_k = z_N + z_R,     (76) 

z_N = n + Σ_{k∈ζ_N} a_k r_k,     ζ_N = {1, 2, 3, ... , N},     (77) 

w_N = z_N - n,     (78) 

z_R = Σ_{k∈ζ_N^c} a_k r_k.     (79) 






Since z_N and z_R are statistically independent random variables, (18) 
gives bounds to F_z(-p_0). Let us first determine F_{z_N}(a) where a = 
-p_0 + Δu or a = -p_0 - Δl. Methods given in Section 2.2 can be 
used in determining F_{z_N}(a).* We would like to note here that (31) 
must be used in determining F_{z_N}(a) when +1 and -1 do not occur 
with equal probability. 

The recurrence relation given in Ref. 10 to calculate the even order 
moments μ_{2n}'s is to be used with care since the summation in the recur- 
rence relation contains both positive and negative terms. In Appendix B 
we give another recurrence relation to compute μ_{2l}'s (and μ_{2l+1}'s, l ≥ 0). 
Since the new recurrence relation for μ_{2n}'s contains the summation of 
positive terms only, we consider this method of computing μ_{2n}'s pref- 
erable to that given in Ref. 10. 

We used (31) and our new method for computing μ_{2n}'s to calculate 
F_{z_N}(a). 

We shall now determine L_{z_R}(-Δl) and U_{z_R}(Δu). Since z_R is an 
even random variable, we will set Δu = Δl, L_{z_R}(-Δu) = U_{z_R}(Δu). 
Also one can show^{7} that 

Φ_{z_R}(λ) = Π_{k∈ζ_N^c} cosh λr_k 
        ≤ exp [λ Σ_{k∈ζ_A} r_k + (λ^2/2) Σ_{k∈ζ_{A^c}} r_k^2],     ζ_A ∪ ζ_{A^c} = ζ_N^c.     (80) 

From (68) we have 

U_{z_R}(Δu) = exp [-(Δu - Σ_{k∈ζ_A} r_k)^2 / (2 Σ_{k∈ζ_{A^c}} r_k^2)],     Δu - Σ_{k∈ζ_A} r_k ≥ 0.     (81) 

Equation (18) now yields 

F_{z_N}(-p_0 - Δu)[1 - 2 exp {-(Δu - Σ_{k∈ζ_A} r_k)^2 / (2 Σ_{k∈ζ_{A^c}} r_k^2)}] 

        ≤ F_z(-p_0) ≤ F_{z_N}(-p_0 + Δu) + exp [-(Δu - Σ_{k∈ζ_A} r_k)^2 / (2 Σ_{k∈ζ_{A^c}} r_k^2)].     (82) 



For any given N, an optimum Δu can be chosen to minimize the 
difference between the upper and lower bounds in (82). This is often 
found to be difficult and tedious and relies heavily on the search methods 
given in Ref. 7. 

* Other methods (including simulation) can also be used in determining F_{z_N}(a). 

Here we assume that 

ζ_{A^c} = ζ_N^c     (83) 

(that is, ζ_A is empty), and we write 

F_{z_N}(-p_0 - Δu)[1 - 2 exp {-(Δu)^2/2β_N^2}] 
        ≤ F_z(-p_0) ≤ F_{z_N}(-p_0 + Δu) + exp [-(Δu)^2/2β_N^2],     (84) 

β_N^2 = Σ_{k∈ζ_N^c} r_k^2.     (85) 

Note that any number β_*^2 ≥ Σ_{k∈ζ_N^c} r_k^2 can be used in computing the 
bounds in (84). This may be done to simplify computing Σ_{k∈ζ_N^c} r_k^2. 
The difference D_N(Δu, Δu) between the upper and lower bounds 
can be written as 

D_N(Δu, Δu) = Pr [-p_0 - Δu < z_N ≤ -p_0 + Δu] 
        + exp [-(Δu)^2/2β_N^2]{1 + 2F_{z_N}(-p_0 - Δu)}.     (86) 

Since β_N^2 is a strictly monotone-decreasing function of N, D_N(Δu, Δu) 
can be made smaller than any given number ε. Hence we can calculate 
F_z(-p_0) from (84). 

Several different algorithms can be developed to compute F_z(-p_0). 
One of our algorithms is as follows. Let us assume that we have to 
calculate F_z(-p_0) with a fractional error less than ε_1. 

Since F_z(-p_0) ≤ 1, we assume that there exists an N such that 

|F_{z_N}(-p_0) - F_{z_{N+1}}(-p_0)| < ε_2,     (87) 

where 

ε_2 ≤ (1/3) ε_1 min {F_{z_N}(-p_0), F_{z_{N+1}}(-p_0)}.     (88) 

For this N we calculate β_N and choose Δu so that 

exp [-(Δu)^2/2β_N^2] = ε_2/3.     (89) 

We then calculate D_N(Δu, Δu) and compare it with 

X_N = ε_1 F_{z_N}(-p_0 - Δu)[1 - 2 exp {-(Δu)^2/2β_N^2}].     (90) 

We increase N so that 

D_{N'}(Δu, Δu) ≤ X_{N'},     N' ≥ N.     (91) 

It is not necessary to increase N in steps of one. The step size can be 
chosen to suit particular examples. 




From (18) and (91) we can write 

A_{N'}(-p_0) ≤ F_z(-p_0) ≤ B_{N'}(-p_0),     (92) 

A_{N'}(-p_0) = F_{z_{N'}}(-p_0 - Δu)[1 - 2 exp {-(Δu)^2/2β_{N'}^2}],     (93) 

B_{N'}(-p_0) = F_{z_{N'}}(-p_0 + Δu) + exp {-(Δu)^2/2β_{N'}^2},     (94) 

B_{N'}(-p_0) - A_{N'}(-p_0) ≤ ε_1 A_{N'}(-p_0).     (95) 

It is evident from (92) and (95) that F_z(-p_0) is equal to A_{N'}(-p_0) 
or B_{N'}(-p_0) with an error less than ε_1. 

We have programmed this algorithm on a digital computer and we 
have been very successful in evaluating F_z(-p_0) from this algorithm. 

4.1 Applications 

Let us now assume that p(t) is obtained by passing a square pulse 
through a single-pole RC filter, or that 

p(t) = 0,     t ≤ 0,     (96) 

p(t) = 1 - exp (-2πWt),     0 ≤ t ≤ T,     (97) 

p(t) = exp [-2πW(t - T)] - exp [-2πWt],     t ≥ T.     (98) 

For this pulse we can write 

p_0 = 1 - exp (-2πWt_0),     0 ≤ t_0 ≤ T,     (99) 

and 

r_k = [1 - exp (-2πWT)] exp [-2πW{t_0 + (k - 1)T}],     k ≥ 1.     (100) 

For 2WT = 0.5 and t_0 = T, we plot in Fig. 4 F_z(-p_0) with an 
error less than 0.2 percent. In this figure we also plot N', the number 
of terms required in estimating F_z(-p_0). F_{z_N}(-p_0) is calculated from 
(31) with a truncation error of less than 0.01 percent. 
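
A sketch of the iterative procedure (87)-(95), specialized to the exponential pulse (96)-(100) with 2WT = 0.5 and t_0 = T, is given below. F_{z_N} is evaluated here by direct enumeration of the 2^N sign patterns rather than by the series (31); σ, the starting N, the step size, and the tail length are illustrative choices.

    # Sketch of the procedure (87)-(95) for the exponential pulse (96)-(100),
    # with 2WT = 0.5 and t0 = T.  F_zN is computed by direct enumeration of the
    # 2**N sign patterns; sigma, the starting N, and the step size are
    # illustrative choices.
    import itertools, math

    def r_exp(k, two_wt=0.5, t0_over_T=1.0):
        # r_k of (100): 2*pi*W*T = pi*two_wt
        piw = math.pi * two_wt
        return (1.0 - math.exp(-piw)) * math.exp(-piw * (t0_over_T + k - 1))

    def F_zN(a, sigma, r):
        # F_zN(a) = average over sign patterns of (1/2) erfc((-a + w_N)/(sigma*sqrt(2)))
        acc = 0.0
        for signs in itertools.product((-1.0, 1.0), repeat=len(r)):
            w = sum(s * rk for s, rk in zip(signs, r))
            acc += 0.5 * math.erfc((-a + w) / (sigma * math.sqrt(2.0)))
        return acc / 2**len(r)

    def F_z_bounds(p0, sigma, eps1, n_start=4, n_max=16, k_tail=2000):
        n = n_start
        while n <= n_max:
            r = [r_exp(k) for k in range(1, n + 1)]
            beta2 = sum(r_exp(k)**2 for k in range(n + 1, k_tail))          # (85)
            eps2 = (eps1 / 3.0) * F_zN(-p0, sigma, r)                       # cf. (88)
            du = math.sqrt(2.0 * beta2 * math.log(3.0 / eps2))              # (89)
            e = math.exp(-du * du / (2.0 * beta2))
            lower = F_zN(-p0 - du, sigma, r) * (1.0 - 2.0 * e)              # (93)
            upper = F_zN(-p0 + du, sigma, r) + e                            # (94)
            if upper - lower <= eps1 * lower:                               # (95)
                return lower, upper, n
            n += 1
        raise RuntimeError("n_max reached without meeting the error criterion")

    p0 = 1.0 - math.exp(-math.pi * 0.5)        # (99) with t0 = T
    print(F_z_bounds(p0, sigma=0.15, eps1=0.002))

The direct enumeration keeps the sketch short; for larger N the series (31), with the moment recurrence of Appendix B, is the method intended above.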

Let us now consider the ideal bandlimited pulse p(t) where 

p(t) = sin (πt/T)/(πt/T),     (101) 

p_0 = sin (πδ)/(πδ),     δ = t_0/T < 1,     t_0 > 0.     (102) 



Fig. 4 — Probability of error of binary coherent digital system with intersymbol 
interference and additive gaussian noise. The received pulse is an exponential pulse, 
and 2WT = 0.5. The upper bound B_{N'}(-p_0) is plotted in this figure and N was 
increased in steps of one. [B_{N'}(-p_0) - F_z(-p_0)]/F_z(-p_0) < 0.002. The truncation 
error is less than 0.01 percent. (Axes: probability of error versus signal-to-noise 
ratio in decibels; the upper scale gives the number of terms N'.) 



We shall assume that we take an even number of terms in w_N in 
estimating F_{z_N}(-p_0). We have 

β_N^2 = Σ_{l=2N+1}^∞ r_l^2     (103) 

        = (sin^2 πδ/π^2) Σ_{k=N+1}^∞ [1/(k - δ)^2 + 1/(k + δ)^2]     (104) 

        < 2(1 + δ^2) sin^2 πδ / [π^2(1 - δ^2)^2 N] ≡ α_N^2.     (105) 

Since α_N^2 is more easily computed than β_N^2, we shall use α_N^2 in (84). 

For δ = 0.05 we plot in Fig. 5 F_z(-p_0) with an error less than 50 
percent when F_z(-p_0) ≥ 2 × 10^{-6} and less than 100 percent when 
F_z(-p_0) < 2 × 10^{-6}. In this figure we also plot N', the number of 
terms required in estimating F_z(-p_0). Since α_N^2 is a slowly decreasing 
function of N, the number of terms required for estimating F_z(-p_0) 
is much larger than that in the earlier example. 
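
The following sketch computes the ordered pulse samples r_k for this example and the simplified residual bound α_N^2 used in place of β_N^2 in (84); the specific constant used in alpha_N_sq, and all numbers below, should be treated as illustrative assumptions.

    # Ordered samples r_k of the ideal bandlimited pulse and the simplified
    # residual bound alpha_N**2 of (105) that replaces beta_N**2 in (84).
    import math

    def sinc_samples(delta, half_width):
        # |p(t0 - l*T)| = sin(pi*delta)/(pi*|l - delta|) for l != 0
        s = math.sin(math.pi * delta)
        vals = [s / (math.pi * abs(l - delta))
                for l in range(-half_width, half_width + 1) if l != 0]
        return sorted(vals, reverse=True)          # the nonincreasing sequence {r_k}

    def alpha_N_sq(delta, N):
        s2 = math.sin(math.pi * delta) ** 2
        return 2.0 * (1.0 + delta**2) * s2 / (math.pi**2 * (1.0 - delta**2)**2 * N)

    delta, N = 0.05, 1000
    r = sinc_samples(delta, 100000)
    beta_sq = sum(v * v for v in r[2 * N:])        # residual sum beyond 2N terms, cf. (103)
    print(beta_sq, alpha_N_sq(delta, N), beta_sq <= alpha_N_sq(delta, N))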

Since z_N contains a gaussian random variable and since z_R is an even 
random variable, (55) can also be used to obtain upper and lower 
bounds to F_z(-p_0). Equations (53) and (80) can be shown to yield 

m_R = Σ_{k∈ζ_A} r_k,     (106) 

σ_R^2 = Σ_{k∈ζ_{A^c}} r_k^2.     (107) 

By choosing ζ_{A^c} = ζ_N^c, we obtain the bounds given in Ref. 20. 

Here we would like to note that the relative merits of the two sets 
of bounds cannot be compared as the bounds in (55) may not be appli- 
cable when the system has a closed eye pattern. The lower bound in 
(55) can be shown to be tighter than that in (18) but is not applicable 
to a system with a noneven z_R. The random variable z_R is noneven 
if +1 and -1 do not occur with equal probability. From the point of 
view of computation, tightness, and applicability, we think that specific 
problems should determine the set of bounds best suited to them. 

The extension of this analysis to m-ary coherent digital systems, 
m > 2, and binary coherent phase-shift keyed systems is obvious from 
Refs. 7 and 9. The analysis for higher-order phase-shift keyed systems 
needs extensive modification and will be treated in a future publication. 

Fig. 5 — Probability of error of binary coherent digital system with intersymbol 
interference and additive gaussian noise. The received pulse is an ideal bandlimited 
pulse, and it is sampled at t_0, t_0/T = 0.05. The upper bound B_{N'}(-p_0) is plotted in 
this figure and N was increased in steps of 100. [B_{N'}(-p_0) - F_z(-p_0)]/F_z(-p_0) < 0.5 
for F_z(-p_0) ≥ 2 × 10^{-6}, and [B_{N'}(-p_0) - F_z(-p_0)]/F_z(-p_0) < 1 for F_z(-p_0) < 
2 × 10^{-6}. The truncation error is less than 0.1 percent. (Axes: probability of error 
versus signal-to-noise ratio in decibels; the upper scale gives the number of terms 
N' × 10^{-3}.) 

Fig. 6 — Distribution function F_z(a) = Pr [z_N + z_R ≤ a]. 
V. DISTRIBUTION FUNCTION F_z(a) WITH ARBITRARY z_N AND z_R 

Consider two one-dimensional random variables z_N and z_R. The 
joint probability distribution of z_N and z_R is a distribution in R^2, or a 
two-dimensional distribution. 

Now the probability distribution of z = z_N + z_R is given by (see 
Fig. 6) 

F_z(a) = Pr [z_N + z_R ≤ a] 
        = ∫_Q dP(S),     (x, y) ∈ Q if x + y ≤ a,     (108) 

and P(S) is the probability function of z_N and z_R.^{15} P(S) represents 
the probability of the relation (x, y) ∈ S. 

Since dP(S) ≥ 0, note that (see Fig. 7) 

∫_Q dP(S) ≤ ∫_{Q_1} dP(S) + ∫_{Q_2} dP(S),     (x, y) ∈ Q_1 if x ≤ a + Δl, 
        (x, y) ∈ Q_2 if y ≤ -Δl.     (109) 







Fig. 7 — Upper bound on distribution function F_z(a). 



Since 

∫_{Q_1} dP(S) = Pr [z_N ≤ a + Δl] = F_{z_N}(a + Δl),     (110) 

∫_{Q_2} dP(S) = Pr [z_R ≤ -Δl],     (111) 

F_z(a) ≤ F_{z_N}(a + Δl) + Pr [z_R ≤ -Δl].     (112) 

Also we have (see Fig. 8) 

∫_Q dP(S) ≥ ∫_{Q_3} dP(S) - ∫_{Q_4} dP(S),     (x, y) ∈ Q_3 if x ≤ a - Δu, 
        (x, y) ∈ Q_4 if y ≥ Δu.     (113) 

Since 

∫_{Q_3} dP(S) = Pr [z_N ≤ a - Δu] = F_{z_N}(a - Δu),     (114) 

∫_{Q_4} dP(S) = Pr [z_R ≥ Δu],     (115) 

F_z(a) ≥ F_{z_N}(a - Δu) - Pr [z_R ≥ Δu].     (116) 






From (112) and (116) we can write 

F_{z_N}(a - Δu) - Pr [z_R ≥ Δu] 
        ≤ F_z(a) ≤ F_{z_N}(a + Δl) + Pr [z_R ≤ -Δl].     (117) 

Equation (117) is valid even when z_N and z_R are statistically dependent 
random variables. 

If the distribution of z_R is very much concentrated around some 
point y = y_0, it was shown in Sections II and IV that F_z(a) can be 
evaluated with arbitrarily small error if z_N and z_R are statistically in- 
dependent random variables and if we can bound F_{z_R}(y). If z_N and z_R 
are statistically dependent random variables, equation (117) shows 
that the same techniques can be used to compute F_z(a) if the distribu- 
tion of z_R is very much concentrated around some point y = y_0. 

VI. CONCLUSIONS 

We have presented simple upper and lower bounds on the distribution 
function of the sum of two random variables in terms of the marginal 
distribution functions of the variables. 

We have also derived several other bounds when one of the random 
variables is a gaussian random variable or when one of the distribution 
functions is convex or concave. 




Fig. 8 — Lower bound on distribution function F_z(a). 




These bounds are then applied to the error rate analysis of a binary 
coherent digital system subject to intersymbol interference and additive 
gaussian noise. Since the difference between the upper and lower 
bounds is a monotone decreasing function of the number of pulses in 
the finite pulse train, the bounds can be used to compute the error 
probability with arbitrarily small error. Application of these bounds 
is illustrated by two examples. Relative merits of the bounds are also 
briefly discussed. 

Many other applications including the analysis of co-channel and 
adjacent channel interference in communication systems will be evident 
to the reader. Some such novel applications will be given in a future 
publication. 

APPENDIX A 

Let us write 

G(α) = ½ erfc (x + α) + ½ erfc (x - α).     (118) 

If x = 0, one can easily show that 

½ erfc (α) + ½ erfc (-α) = 1 = erfc (x).     (119) 

We shall now assume that x ≠ 0. Also since G(α) is an even function 
of α, we shall consider α ≥ 0. From (30) and (118) we can write 

G'(α) = (1/√π)[exp {-(x - α)^2} - exp {-(x + α)^2}].     (120) 

Note that G'(0) = 0 and that there are no other finite stationary 
points of G(α), x ≠ 0. Further one can show that 

G'(α) > 0,     x > 0, α > 0,     (121) 

G'(α) < 0,     x < 0, α > 0.     (122) 

From (118), (121), and (122) we then have 

½ erfc (x + α) + ½ erfc (x - α) ≥ erfc (x),     x ≥ 0,     (123) 

½ erfc (x + α) + ½ erfc (x - α) ≤ erfc (x),     x ≤ 0.     (124) 

For the sake of completeness we would like to note here that erfc (x) 
is a convex function for x ≥ 0 and is concave for x ≤ 0. Hence we 
can also show that 

p erfc (x + α) + (1 - p) erfc (x - α) 
        ≥ erfc (x),     x + α ≥ 0, x - α ≥ 0, 0 ≤ p ≤ 1,     (125) 




and 

p erfc (x + α) + (1 - p) erfc (x - α) 
        ≤ erfc (x),     x + α ≤ 0, x - α ≤ 0, 0 ≤ p ≤ 1.     (126) 

Observe that (125) is not sufficient to prove (123). 
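
A quick numerical spot-check of (123)-(124); the grid of test points is arbitrary.

    # Numerical spot-check of (123)-(124).
    from math import erfc

    def G(x, alpha):
        return 0.5 * erfc(x + alpha) + 0.5 * erfc(x - alpha)

    grid = [(x, a) for x in (0.0, 0.5, 2.0, -0.5, -2.0) for a in (0.1, 1.0, 3.0)]
    assert all(G(x, a) >= erfc(x) - 1e-12 for x, a in grid if x >= 0)   # (123)
    assert all(G(x, a) <= erfc(x) + 1e-12 for x, a in grid if x <= 0)   # (124)
    print("(123) and (124) hold at all test points")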

APPENDIX B 

Let η_k denote the partial sum Σ_{i=1}^k ξ_i, where 

w_N = Σ_{i=1}^N ξ_i,     (127) 

and the ξ_i are statistically independent random variables. From (32) and 
(127) we can write 

μ_n = ⟨w_N^n⟩ = θ_n(N),     (128) 

where 

θ_n(k) = ⟨η_k^n⟩,     n ≥ 1,     θ_0(k) = 1.     (129) 

Now 

θ_n(k) = ⟨(η_{k-1} + ξ_k)^n⟩,     k > 1,     (130) 

or 

θ_n(k) = Σ_{p=0}^n \binom{n}{p} α_p(k) θ_{n-p}(k - 1),     k > 1,     (131) 

where 

α_p(k) = ⟨ξ_k^p⟩,     α_0(k) = 1,     k ≥ 1.     (132) 

Since 

θ_n(1) = ⟨η_1^n⟩ = ⟨ξ_1^n⟩ = α_n(1),     (133) 

and since we shall assume that all the α_p(k) are known or can be eval- 
uated, we have a recurrence relation in (131) to compute μ_n. 

Often the ξ_i are even random variables, and in this case we can show 
that 

μ_{2l+1} = 0,     l ≥ 0,     (134) 

μ_{2n} = θ_{2n}(N),     (135) 

θ_{2n}(k) = Σ_{p=0}^n \binom{2n}{2p} α_{2p}(k) θ_{2(n-p)}(k - 1).     (136) 

The recurrence relation (136) contains only the sum of positive terms, 
and hence can easily be used to compute the μ_{2n}'s. 

In Section IV, ξ_k = a_k r_k, μ_{2l+1} = 0, l ≥ 0, and α_{2l}(k) = r_k^{2l}, l ≥ 0. 
All even order moments of w_N can therefore be easily calculated from 
(136). 
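
A minimal Python sketch of the recurrence (136) for ξ_k = a_k r_k, for which α_{2l}(k) = r_k^{2l}, checked against brute-force enumeration; the pulse amplitudes are illustrative.

    # Sketch of the recurrence (136) for xi_k = a_k * r_k, alpha_{2l}(k) = r_k**(2l).
    import itertools
    from math import comb

    def even_moments(r, n_max):
        """theta[2n] = 2n-th moment of w_N = sum a_k r_k (odd moments vanish)."""
        theta = [1.0] + [0.0] * n_max          # moments of the empty partial sum
        for rk in r:                           # add one pulse at a time, as in (131)/(136)
            new = theta[:]
            for n2 in range(2, n_max + 1, 2):
                new[n2] = sum(comb(n2, p2) * rk**p2 * theta[n2 - p2]
                              for p2 in range(0, n2 + 1, 2))
            theta = new
        return theta

    r = (0.2, 0.1, 0.05)
    mu = even_moments(r, 6)
    # brute-force check of the fourth moment over the 8 sign patterns
    brute = sum(sum(s * rk for s, rk in zip(sgn, r))**4
                for sgn in itertools.product((-1, 1), repeat=3)) / 8.0
    print(mu[4], brute)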

In Refs. 18 and 25 methods have been developed to calculate μ_{2n} of 
the random variable θ where 

θ = Σ_{j=1}^K R_j cos θ_j,     (137) 

and the θ_j are independently distributed random variables uniformly 
distributed over the range [0, 2π). Most of these methods use an infinite 
series expansion, and often the accuracy obtained from these methods 
is questionable.^{25} 

Noting that we can set ξ_j = R_j cos θ_j, μ_{2l+1} = 0, l ≥ 0, and 

α_{2l}(j) = ⟨(R_j cos θ_j)^{2l}⟩, 

or 

α_{2l}(j) = R_j^{2l} \binom{2l}{l} / 2^{2l},     (138) 

all even order moments μ_{2n} can be calculated by using (136) and (138). 
This method of calculating the μ_{2n}'s can be shown to be analogous to 
that given in Ref. 26 and is preferable to that in Refs. 18 and 25. 
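
The same recurrence, with the cosine moments (138) substituted for α_{2l}(k), gives the even moments of θ = Σ R_j cos θ_j; the amplitudes R below are illustrative.

    # Recurrence (136) with the cosine moments (138): alpha_{2l}(j) = R_j**(2l) * C(2l, l) / 4**l.
    from math import comb

    def cos_even_moments(R, n_max):
        theta = [1.0] + [0.0] * n_max
        for Rj in R:
            new = theta[:]
            for n2 in range(2, n_max + 1, 2):
                new[n2] = sum(comb(n2, p2) * Rj**p2 * comb(p2, p2 // 2) / 4**(p2 // 2)
                              * theta[n2 - p2] for p2 in range(0, n2 + 1, 2))
            theta = new
        return theta

    print(cos_even_moments((1.0, 0.5), 4))   # second moment = (1 + 0.25)/2 = 0.625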

REFERENCES 

1. Lucky, R. W., Salz, J., and Weldon, E. J., Jr., Principles of Data Communication, 
New York: McGraw-Hill Book Co., 1968, pp. 59-63. 

2. Aaron, M. R., and Tufts, D. W., "Intersymbol Interference and Error Prob- 

ability," IEEE Trans. Inform. Theory, IT-12, January 1966, pp. 26-34. 

3. Aein, J. M., and Hancock, J. C, "Reducing the Effects of Intersymbol Inter- 

ference with Correlation Receivers," IEEE Trans. Inform. Theory, IT-9, July 
1963, pp. 167-175. 

4. Calandruno, L., Crippa, G., and Immovilli, G., "Intersymbol Interference in 

Binary and Quaternary PSK and DCPSK Systems," Alta Frequenza, 38, 
May 1969, pp. 337-344. 

5. Tufts, D. W., "Summary of Certain Intersymbol Interference Results," IEEE 

Trans. Inform. Theory, IT-10, October 1964, p. 380. 

6. Saltzberg, B. R., "Error Probabilities for Binary Signal Perturbed by Inter- 
symbol Interference and Gaussian Noise," IEEE Trans. Com. Sys., CS-12, 
March 1964, pp. 117-120. 

7. Saltzberg, B. R., "Intersymbol Interference Error Bounds with Application to 
Ideal Bandlimited Signaling," IEEE Trans. Inform. Theory, IT-14, July 
1968, pp. 563-568. 

8. Lugannani, R., "Intersymbol Interference and Probability of Error in Digital 

Systems," IEEE Trans. Inform. Theory, IT-15, November 1970, pp. 686- 




9. Prabhu, V. K., "Performance of Coherent Phase-Shift Keyed Systems with 
Intersymbol Interference," IEEE Trans. Inform. Theory, IT-17, No. 4 
(July 1971), pp. 418-431. 

10. Ho, E. Y., and Yeh, Y. S., "A New Approach for Evaluating the Error Prob- 

ability in the Presence of Intersymbol Interference and Additive Gaussian 
Noise," B.S.T.J., 49, No. 9 (November 1970), pp. 2249-2266. 

11. Celebiler, M. I., and Shimbo, O., "Intersymbol Interference Considerations in 

Digital Communications," ICC Record, June 1970, pp. 8.1-8.10. 

12. Shimbo, O., and Celebiler, M. I., "The Probability of Error Due to Intersymbol 

Interference and Gaussian Noise in Digital Communication Systems," IEEE 
Trans, on Com. Tech., COM-19, No. 2 (April 1971), pp. 113-119. 

13. Mullins, J. H., "A Computer Program Simulating the Two-Level Millimeter 

Wave Waveguide Communication System," unpublished work. 

14. Tu, P. J., "A Study of the System Impairments for the Two-Level FM-DCPSK 

WT-4 Transmission System," unpublished work. 

15. Cramer, H., Mathematical Methods of Statistics, Princeton University Press, 

Princeton, N. J., 1957. 

16. Wagner, H. M., Principles of Operations Research, Englewood Cliffs, N. J.: 

Prentice-Hall, 1969, pp. 592-596. 

17. Pagones, M. J., "Error Probability Upper Bound of a Coherently Detected 

PSK Signal Corrupted by Interference and Gaussian Noise," unpublished 
work. 

18. Prabhu, V. K., "Error Rate Considerations for Coherent Phase-Shift Keyed 
Systems with Co-Channel Interference," B.S.T.J., 48, No. 3 (March 1969), 
pp. 743-767. 

19. Abramowitz, M., and Stegun, I. A., Handbook of Mathematical Functions, 

Washington, D. C.: National Bureau of Standards, 1967, p. 787. 

20. Yeh, Y. S., and Ho, E. Y., "Improved Intersymbol Interference Error Bounds 

in Digital Systems," B.S.T.J., 50, No. 8 (October 1971), pp. 2585-2598. 

21. Algazi, V. R., "Bounds on the Spectra of Angle-Modulated Waves," IEEE Trans. 
Com. Tech., COM-16, No. 4 (August 1968), pp. 561-566. 

22. Chernoff, H., "A Measure of Asymptotic Efficiency for Tests of a Hypothesis 

Based on a Sum of Observations," Ann. Math. Stat., 23, April 1952, pp. 493- 
507. 

23. Bennett, G., "On the Probability of Large Deviations from the Expectation 

for Sums of Bounded, Independent, Random Variables," Biometrika, 50, 
1963, pp. 528-535. 

24. Prabhu, V. K., "Error-Probability Upper Bound for Coherently Detected PSK 

Signals with Co-Channel Interference," Elec. Letters, 5, August 1969, pp. 
383-385. 

25. Rosenbaum, A. S., "Binary PSK Error Probabilities with Multiple Co-Channel 

Interferences," IEEE Trans. Com. Tech., COM-18, No. 3 (June 1970), pp. 
241-253. 

26. Goldman, J., "Moments of the Sum of Circularly Symmetric Random Variables," 

to be published.